
    Concurrent semantic priming and lexical interference for close semantic relations in blocked-cyclic picture naming: Electrophysiological signatures

    In the present study, we employed event-related brain potentials to investigate the effects of semantic similarity on different planning stages during language production. We manipulated semantic similarity by controlling feature overlap within taxonomic hierarchies. In a blocked-cyclic naming task, participants named pictures in repeated cycles, blocked into semantically close, distant, or unrelated conditions. Only closely related items, but not distantly related items, induced semantic blocking effects. In the first presentation cycle, naming was facilitated, and amplitude modulations in the N1 component around 140–180 ms post-stimulus onset predicted this behavioral facilitation. In contrast, in later cycles, naming was delayed, and a negative-going posterior amplitude modulation around 250–350 ms post-stimulus onset predicted this interference. These findings point to easier object recognition or identification underlying the initial facilitation and to increased difficulty during lexical selection underlying the later interference. The N1 modulation was reduced but persisted in later cycles, in which interference dominated, and the posterior negativity was also present in cycle 1, in which facilitation dominated, demonstrating concurrent effects of conceptual priming and lexical interference in all naming cycles. Our assumptions about the functional roles these two opposing forces play in producing semantic context effects are further supported by the finding that the joint modulation of naming latency by these two ERP effects emerged exclusively when naming closely related, but not unrelated, items. The current findings demonstrate that close relations, but not distant taxonomic relations, induce stronger semantic blocking effects, and that temporally overlapping electrophysiological signatures reflect a trade-off between facilitatory priming and interfering lexical competition.

    Robots facilitate human language production

    Despite recent developments in integrating autonomous and human-like robots into many aspects of everyday life, social interaction with robots remains a challenge. Here, we focus on a central tool for social interaction: verbal communication. We assess the extent to which humans co-represent (simulate and predict) a robot’s verbal actions. During a joint picture naming task, participants took turns naming objects together with a social robot (Pepper, Softbank Robotics). Previous findings using this task with human partners revealed internal simulations of the partner’s naming down to the level of selecting words from the mental lexicon, reflected in partner-elicited inhibitory effects on subsequent naming. Here, with the robot, these partner-elicited inhibitory effects were not observed. Instead, naming was facilitated, as revealed by faster naming of word categories co-named with the robot. This facilitation suggests that robots, unlike humans, are not simulated down to the level of lexical selection. Instead, a robot’s speaking appears to be simulated at the initial level of language production, where the meaning of the verbal message is generated, resulting in facilitated language production due to conceptual priming. We conclude that robots facilitate core conceptualization processes when humans transform thoughts into language during speaking.

    Multimodal reinforcement learning for partner specific adaptation in robot-multi-robot interaction

    Successful and efficient teamwork requires knowledge of the individual team members' expertise. Such knowledge is typically acquired in social interaction and forms the basis for socially intelligent, partner-adapted behavior. This study aims to implement this ability in teams of multiple humanoid robots. To this end, a humanoid robot, Nao, interacted with three Pepper robots to perform a sequential audio-visual pattern recall task that required integrating multimodal information. Nao outsourced its decisions (i.e., action selections) to its robot partners by applying reinforcement learning, so as to perform the task efficiently in terms of neural computational cost. During the interaction, Nao learned its partners' specific expertise, which allowed it to turn for guidance to the partner whose expertise corresponded to the current task state. Nao's cognitive processing included a multimodal auto-associative memory that allowed the cost of perceptual processing (i.e., cognitive load) to be determined when processing audio-visual stimuli. In turn, this processing cost was converted into a reward signal by an internal reward generation module. In this setting, the learner robot Nao aimed to minimize cognitive load by turning to the partner whose expertise corresponded to a given task state. Overall, the results indicate that the learner robot discovers the expertise of its partners and exploits this information to execute its task with low neural computational cost, or cognitive load.
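
    The cost-to-reward loop described above can be illustrated with a small, self-contained sketch. The snippet below is not the study's implementation: the state encoding, partner names, simulated cognitive-load model, and the tabular, bandit-style learning rule are all illustrative assumptions standing in for Nao's actual modules.

```python
# Minimal sketch of cost-driven partner selection (illustrative, not the paper's code).
# Assumptions: task states are discrete indices, each partner has a hidden "expertise"
# over states, and cognitive load is simulated as low when the chosen partner is expert.
import random

N_STATES = 3                                               # e.g., steps of the recall task
PARTNERS = ["pepper_1", "pepper_2", "pepper_3"]            # hypothetical partner identifiers
EXPERTISE = {0: "pepper_1", 1: "pepper_2", 2: "pepper_3"}  # hidden ground truth

ALPHA, EPSILON, EPISODES = 0.1, 0.2, 500
q = {(s, p): 0.0 for s in range(N_STATES) for p in PARTNERS}

def cognitive_load(state, partner):
    """Simulated processing cost: low if the partner is expert for this state."""
    base = 0.2 if EXPERTISE[state] == partner else 1.0
    return base + random.uniform(0.0, 0.1)

def internal_reward(load):
    """Internal reward module: higher reward for lower cognitive load."""
    return -load

for _ in range(EPISODES):
    state = random.randrange(N_STATES)
    if random.random() < EPSILON:                          # explore
        partner = random.choice(PARTNERS)
    else:                                                  # exploit learned values
        partner = max(PARTNERS, key=lambda p: q[(state, p)])
    r = internal_reward(cognitive_load(state, partner))
    q[(state, partner)] += ALPHA * (r - q[(state, partner)])  # bandit-style update

for s in range(N_STATES):
    best = max(PARTNERS, key=lambda p: q[(s, p)])
    print(f"state {s}: learned to ask {best}")
```

    Running this sketch, the learned values concentrate on the partner whose simulated expertise matches each state, mirroring the expertise discovery reported above.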

    Trustworthiness assessment in multimodal human-robot interaction based on cognitive load

    In this study, we extend our robot trust model to a multimodal setting in which the Nao robot leverages audio-visual data to perform a sequential multimodal pattern recall task while interacting with a human partner who follows one of three guiding strategies: reliable, unreliable, or random. Here, the humanoid robot is equipped with a multimodal auto-associative memory module to process audio-visual patterns and extract cognitive load (i.e., computational cost), and an internal reward module to perform cost-guided reinforcement learning. After the interactive experiments, the robot associates the low cognitive load (i.e., high cumulative reward) yielded during the interaction with high trustworthiness of the partner's guiding strategy. At the end of the experiment, the robot is given a free choice to select a trustworthy instructor. We show that the robot forms trust in a reliable partner. In a second setting of the same experiment, we endow the robot with an additional, simple theory-of-mind module to assess the efficacy of the instructor in helping the robot perform the task. Our results show that the robot's performance improves when it factors the instructor assessment into its action decisions.
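
    As an illustration of how cumulative internal reward can be turned into a trust judgement, the following sketch simulates one interaction session per guiding strategy and lets the robot pick the partner with the highest accumulated reward. The per-strategy load values, session length, and the omission of the theory-of-mind extension are assumptions made for brevity, not details from the study.

```python
# Illustrative sketch of trust formation from cumulative internal reward.
# Assumption: cognitive load per trial is simulated per guiding strategy;
# this is not the study's code or its actual load values.
import random

STRATEGIES = {"reliable": 0.2, "unreliable": 0.8, "random": 0.5}  # mean simulated load
N_TRIALS = 100

def run_session(mean_load):
    """Accumulate internal reward (negative cognitive load) over one interaction session."""
    return sum(-(mean_load + random.uniform(-0.1, 0.1)) for _ in range(N_TRIALS))

cumulative_reward = {name: run_session(load) for name, load in STRATEGIES.items()}

# Trust: the robot associates low load (high cumulative reward) with a trustworthy partner.
trusted = max(cumulative_reward, key=cumulative_reward.get)
print("cumulative reward per strategy:",
      {k: round(v, 1) for k, v in cumulative_reward.items()})
print("free-choice selection:", trusted)
```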

    Forming robot trust in heterogeneous agents during a multimodal interactive game

    This study presents a robot trust model based on cognitive load that uses multimodal cues in a learning setting to assess the trustworthiness of heterogeneous interaction partners. As a test-bed, we designed an interactive task in which a small humanoid robot, Nao, is asked to perform a sequential audio-visual pattern recall task while minimizing its cognitive load by receiving help from its interaction partner, either a robot, Pepper, or a human. The partner displayed one of three guiding strategies: reliable, unreliable, or random. The robot is equipped with two cognitive modules: a multimodal auto-associative memory and an internal reward module. The former represents the robot's multimodal cognitive processing and allows a 'cognitive load' or 'cost' to be assigned to the processing that takes place, while the latter converts this cognitive processing cost into an internal reward signal that drives cost-based behavior learning. Here, the robot asks its interaction partner for help when its own action leads to a high cognitive load; it then receives an action suggestion from the partner and follows it. After performing interactive experiments with each partner, the robot uses the cognitive load yielded during the interaction to assess the trustworthiness of the partners; that is, it associates high trustworthiness with low cognitive load. We then give the robot a free choice to select the trustworthy interaction partner for the next task. Our results show that, overall, the robot selects partners with reliable guiding strategies. Moreover, the robot's ability to identify a trustworthy partner was unaffected by whether the partner was a human or a robot.
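
    One plausible way to ground the 'cognitive load' that an auto-associative memory assigns to processing is sketched below, using a Hopfield-style network whose settling effort and residual reconstruction error serve as the cost. The pattern sizes, the Hebbian storage rule, and the specific load formula are illustrative assumptions rather than the paper's architecture.

```python
# Sketch: deriving a "cognitive load" from an auto-associative memory.
# A Hopfield-style network recalls stored audio-visual patterns; the effort needed to
# settle plus the residual mismatch is treated as the processing cost (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
N = 64                                        # units for a concatenated audio-visual pattern
patterns = rng.choice([-1, 1], size=(5, N))   # stored multimodal patterns

W = (patterns.T @ patterns) / N               # Hebbian weights
np.fill_diagonal(W, 0.0)

def settle(x, max_iters=50):
    """Asynchronously update units until stable; return the recalled pattern and iterations."""
    x = x.copy()
    for it in range(1, max_iters + 1):
        prev = x.copy()
        for i in rng.permutation(N):
            x[i] = 1 if W[i] @ x >= 0 else -1
        if np.array_equal(x, prev):
            return x, it
    return x, max_iters

def cognitive_load(cue):
    recalled, iters = settle(cue)
    mismatch = np.mean(recalled != cue)       # residual reconstruction error
    return iters + 10 * mismatch              # illustrative combination of effort and error

clean = patterns[0]
noisy = clean * rng.choice([1, -1], size=N, p=[0.8, 0.2])   # degraded cue
print("load for clean cue:", round(cognitive_load(clean), 2))
print("load for noisy cue:", round(cognitive_load(noisy), 2))
```

    In this toy setting, clean cues settle almost immediately while degraded cues incur a higher load, which is the kind of cost signal the internal reward module described above would convert into reward.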

    Expectancy effects in the EEG during joint and spontaneous word-by-word sentence production in German

    Our aim in the present study is to measure neural correlates during spontaneous interactive sentence production. We present a novel approach using the word-by-word technique from improvisational theatre, in which two speakers jointly produce one sentence. This paradigm allows the assessment of behavioural aspects, such as turn times, and of electrophysiological responses, such as event-related potentials (ERPs). Twenty-five participants constructed a cued but spontaneous four-word German sentence together with a confederate, taking turns for each word of the sentence. In 30% of the trials, the confederate uttered an unexpected gender-marked article. To complete the sentence in a meaningful way, the participant had to detect the violation and retrieve and utter a new, fitting response. We found significant increases in response times after unexpected words and – despite allowing unscripted language production and naturally varying speech material – successfully detected significant N400 and P600 ERP effects for the unexpected word. The N400 EEG activity further significantly predicted the response time of the subsequent turn. Our results show that it is possible to combine behavioural and neuroscientific measures of verbal interactions while retaining sufficient experimental control, and that this combination provides promising insights into the mechanisms of spontaneous spoken dialogue.